
What Is a Data Breach and Why Should Your Organization Care?

by Jon Mostajo 12 min read January 18, 2024

The threat of a data breach is constant in our modern, digital world. As technology advances, so do the strategies and tactics of malicious actors seeking to monetize organizations' vulnerabilities. It's not a matter of if, but when, a data breach could impact your organization, and businesses need to understand how to prepare for and operate in that reality.

What is a Data Breach?

For many organizations, a data breach is arguably one of the greatest threats they face. What is a data breach? Imagine your organization as a fortress, safeguarding a treasure trove of sensitive information—customer data, financial records, proprietary algorithms. A data breach is the unwelcome intrusion into this fortress, where unauthorized individuals gain access to confidential information, often with malicious intent. This can encompass many types of data, including personally identifiable information (PII), financial data, and intellectual property. Breaches can range from intentional cyberattacks to inadvertent exposure caused by system vulnerabilities or human error.

The gravity of data breaches is difficult to overstate: businesses face tangible consequences when their defenses are breached, and there are no signs of the threat slowing down.

The frequency and severity of data breaches are alarming. According to recent studies¹, 55% of healthcare organizations suffered a third-party data breach in the past year. No business is immune to the evolving threat landscape, especially companies that capture customer data and are inherently the stewards of this data.

Understanding the landscape of data breaches will help you better fortify your business against a breach. In the next sections, we’ll explore the causes, impacts, post-breach response strategies, and preventative tactics businesses can employ to safeguard their data.

Causes of Data Breaches

Human error

Even the most well-intentioned employees can become the weak link in an organization’s security chain. According to the “2023 Verizon Data Breach Investigations Report,” 74% of data breaches involve a human element². Investing in comprehensive training programs is essential to foster a culture of cybersecurity awareness and mitigate the risk of employee-related mistakes.

Cybersecurity vulnerabilities

The digital landscape is rife with potential vulnerabilities, and cybercriminals are adept at exploiting them. Regular cybersecurity assessments, prompt system updates, and the implementation of robust security protocols are recommended proactive measures to fortify against breaches that capitalize on system vulnerabilities.

Insider threats

Data breaches can originate from within, whether through disgruntled employees with malicious intent or well-meaning staff who inadvertently compromise security. Gurucul’s “2023 Insider Threat Report” highlights that 60% of organizations experienced insider-related incidents in the past year³. Establishing stringent access controls, closely monitoring user activities, and implementing employee education programs are vital steps to mitigate the risks associated with insider threats.

Weak and Stolen Passwords

Weak and stolen passwords stand as one of the most common gateways for data breaches. Cybercriminals exploit individuals who use easily guessable passwords or recycle them across multiple platforms. This creates a vulnerability that can be easily exploited through automated attacks. Ensuring robust password policies, employing multi-factor authentication, and regularly updating credentials are necessary measures to thwart these breaches and safeguard sensitive information.
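To make the password guidance concrete, here is a minimal sketch, assuming Python and illustrative policy parameters (a 12-character minimum and PBKDF2 settings chosen purely for demonstration), of how an application might enforce a password policy and store only salted hashes rather than plaintext:

```python
import hashlib
import hmac
import os
import re

def meets_policy(password: str) -> bool:
    """Illustrative policy: minimum length plus upper, lower, digit, and symbol."""
    return (
        len(password) >= 12
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store only a random salt and a PBKDF2 hash, never the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

Multi-factor authentication would sit on top of a check like this: even a stolen hash or a guessed password is then not enough on its own.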

Malware

The insidious world of malware is a persistent threat to data security. Malicious software, often disguised as innocuous files or links, infiltrates systems and wreaks havoc by compromising data integrity and confidentiality. Malware can then swiftly spread, leading to unauthorized access and data exfiltration. Regularly updating antivirus software, conducting thorough system scans, and educating employees about the dangers of clicking on suspicious links are pivotal defenses against malware-driven breaches.

Social Engineering

Social engineering has emerged as a cunning and effective tactic in data breaches: manipulating individuals into willingly divulging confidential information. Whether through phishing emails, deceptive phone calls, or impersonation, cybercriminals exploit human trust to gain unauthorized access. Raising awareness among employees about the dangers of social engineering, implementing rigorous verification processes, and fostering a culture of skepticism can fortify an organization's defenses against these subtle yet potent attacks.
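As a toy illustration of one verification process, the sketch below (Python; the trusted-domain list and similarity threshold are invented for the example) flags sender addresses whose domain closely resembles, but does not match, a known-good domain, a common phishing trick:

```python
import difflib

# Hypothetical allow-list; a real deployment would pull this from managed
# configuration, and production phishing defenses layer many more signals.
TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}

def sender_domain(address: str) -> str:
    """Extract the domain part of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def looks_like_spoof(address: str, threshold: float = 0.8) -> bool:
    """Flag senders whose domain resembles, but is not, a trusted domain
    (e.g. examp1e.com vs. example.com)."""
    domain = sender_domain(address)
    if domain in TRUSTED_DOMAINS:
        return False
    return any(
        difflib.SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )
```

A heuristic like this catches only lookalike domains; it complements, rather than replaces, employee awareness and out-of-band verification.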

Physical Attacks

While the digital realm often takes center stage, physical attacks on data infrastructure remain a tangible and underestimated risk. Breaches can occur through unauthorized access to servers, theft of physical storage devices, or tampering with network equipment. Implementing stringent access controls, employing surveillance systems, and securing physical infrastructure are crucial steps to mitigate the threat of data breaches stemming from physical incursions. Layering digital and physical protective measures strengthens your defense against the multifaceted landscape of data breaches.

Impacts on Businesses

Financial repercussions

Data breaches are costly to businesses with immediate and enduring consequences. The “Cost of a Data Breach Report 2023” by IBM reported that the average cost of a data breach was $4.45 million per organization⁴. Long-term financial implications include loss of customers, diminished revenue streams, and increased cybersecurity investments to rebuild trust and fortify defenses against future breaches.

Reputational damage

The fallout from a data breach extends beyond the balance sheet, leaving an indelible mark on a business’s reputation. According to a 2023 survey by Vercara, 66% of U.S. consumers would not trust a company that falls victim to a data breach with their data. Rebuilding trust with transparent communication, swift remediation, and proactive measures to prevent future breaches is essential, demonstrating a commitment to safeguarding sensitive information.

Operational disruptions

Data breaches disrupt daily business operations. According to the Cost of a Data Breach Report 2023 from IBM, it takes an average of 73 days to contain a cyber-attack⁴. Swift recovery requires a meticulous balance between addressing the breach's immediate impact and resuming normal operations to minimize further operational strain.

Legal and regulatory implications

The legal aftermath of a data breach involves navigating a complex landscape of regulations and compliance standards. In the United States, data breaches may trigger legal consequences under various state laws. For instance, the California Consumer Privacy Act (CCPA) allows for statutory damages ranging from $100 to $750 per consumer per incident⁵. Ensuring adherence to data protection laws, promptly reporting breaches to regulatory authorities, and implementing robust security measures become top priorities in avoiding the legal quagmire that often follows a data breach.
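Using the CCPA range cited above, statutory exposure scales linearly with the number of affected consumers. A quick back-of-the-envelope calculation (Python, with a made-up consumer count) shows how fast it grows:

```python
def ccpa_exposure(consumers: int, low: int = 100, high: int = 750) -> tuple[int, int]:
    """Min/max statutory damages in dollars at $100-$750 per consumer per incident."""
    return consumers * low, consumers * high

# A hypothetical breach touching 50,000 California consumers:
low, high = ccpa_exposure(50_000)
print(f"${low:,} to ${high:,}")  # $5,000,000 to $37,500,000
```

Even a mid-sized incident can therefore carry eight-figure statutory exposure before remediation costs, litigation, or reputational losses are counted.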

Notable data breaches

  1. Yahoo! (2013-2014):
    • The personal information associated with all 3 billion user accounts was exposed, including names, birth dates, passwords, and phone numbers.
    • Cause: It is believed that the hack originated through a phishing email sent to a Yahoo! employee. Through this phishing email, it’s believed the hackers were able to access user databases and tools.⁶
    • Cost: $117.5 million in settlements and $350 million off its sale price to Verizon⁷
  2. Marriott International (2018):
    • Information of approximately 500 million guests was compromised, including names, contact details, passport numbers, and travel details.
    • Cause: A cyber-espionage campaign linked to a state-sponsored actor. Attackers gained access to Marriott’s Starwood guest reservation database due to vulnerabilities in the system.⁸
    • Cost: Over $100 million for remediation efforts and regulatory fines.⁹
  3. Capital One (2019):
    • 106 million customers’ personal information, including credit card applications and Social Security numbers, was exposed.
    • Cause: A misconfigured web application firewall that allowed a hacker to exploit a server-side request forgery vulnerability, leading to unauthorized access and the theft of sensitive customer data.¹⁰
    • Cost: Estimated between $100 million and $150 million in 2019 alone.¹¹
  4. SolarWinds (2020):
    • Hackers compromised the software supply chain, affecting numerous government agencies and major corporations globally.
    • Cause: The SolarWinds breach was a sophisticated supply chain attack where malicious actors compromised the software update process, injecting malware into software updates distributed by SolarWinds, allowing them access to numerous government and corporate networks.¹²
    • Cost: At least $18 million¹³
  5. JBS USA (2021):
    • The ransomware attack on the world’s largest meat processor disrupted operations and impacted the company’s IT systems.
    • Cause: A ransomware attack, where cybercriminals exploited vulnerabilities in the company’s IT systems to encrypt data and demand a ransom for its release, causing significant disruptions to operations.¹⁴
    • Cost: $11 million ransom paid by JBS to the hackers to restore its IT systems.

Post-breach response

Assessment and Damage Control

Immediate Action Steps

In the event of a data breach, the speed of the response is a major factor in determining the outcome. Swift and decisive action during the initial moments can prevent the situation from escalating. The primary focus at this stage is isolating the affected systems: swiftly disconnecting compromised servers and devices from the network. This helps stop unauthorized access and establishes the foundation for a more concentrated and effective response. Promptly alerting the incident response team, IT personnel, and relevant stakeholders also helps the organization gain control over the situation.
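As one sketch of the isolation step, an incident-response script might turn a list of flagged hosts into firewall commands for an operator to review before running. The hostnames, addresses, and iptables-style syntax below are illustrative assumptions; real environments often isolate via EDR tooling or switch-port quarantine instead:

```python
def isolation_commands(compromised_hosts: dict[str, str]) -> list[str]:
    """Map {hostname: ip} into reviewable, iptables-style block commands.

    The commands are generated, not executed: a human reviews them first,
    which keeps the isolation step fast but deliberate.
    """
    commands = []
    for host, ip in sorted(compromised_hosts.items()):
        commands.append(f"# isolate {host}")
        commands.append(f"iptables -A INPUT  -s {ip} -j DROP")
        commands.append(f"iptables -A OUTPUT -d {ip} -j DROP")
    return commands

# Example with invented host data:
for line in isolation_commands({"db-01": "10.0.4.17"}):
    print(line)
```

Keeping a script like this pre-written, and rehearsed in tabletop exercises, is what makes "swift and decisive" action possible at 3 a.m.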

Forensic Analysis

Understanding the who, what, and how of an incident is also an important step following a breach. In this context, involving forensic experts in a meticulous analysis is prudent. These professionals specialize in unraveling the intricacies of the breach, identifying entry points, and tracing the movements of attackers within your systems.

The significance of forensic analysis extends beyond mere identification; it serves as the groundwork for prevention. Through a comprehensive study of the employed attack vectors and techniques, organizations can enhance their cybersecurity infrastructure. This process of gathering critical information about the breach contributes to the ability to preempt similar incidents, fostering a more resilient stance against evolving cyber threats.

Communication Strategy

Internal Communication

Effective internal communication plays a pivotal role in building a resilient response framework. In the early stages of a crisis, employees are the initial line of defense. Clearly conveying the severity of the situation gives them a comprehensive understanding of the impact and the organization's devised response plan. This also empowers the workforce, fostering a sense of unity and helping the organization navigate the challenges ahead cohesively, reinforcing its resilience in the face of adversity.

External Communication

External communication holds equal importance, reaching beyond the organization to customers, partners, and stakeholders. Messages should be constructed with transparency, honesty, and a proactive stance. Silence or ambiguity can intensify the repercussions, so prioritizing openness is foundational for rebuilding trust. Being timely and forthright in sharing information about the breach and the steps taken to rectify it is generally a sound strategy when engaging with partners and stakeholders. This approach not only informs but can also shape the perception of the organization's dedication to security and integrity in the aftermath of a breach.

Legal and Regulatory Compliance

Notification Requirements

Within the regulatory framework, a prompt response is an important post-breach step for organizations. The first task is understanding the legal obligations surrounding breach notifications to both regulatory authorities and affected individuals. Requirements vary across regions and industries, so remaining well-informed about these specific nuances is essential.

The timeliness of notifications is another factor for organizations to consider. Numerous jurisdictions impose substantial fines for delays in reporting, making it essential for organizations to adhere to strict timelines. Transparency holds equal weight, necessitating clear communication about the extent of the breach, the nature of the compromised information, and the specific measures being implemented to address the situation. This approach helps maintain compliance with legal standards and plays a vital role in fostering trust among those directly impacted by the breach.
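Deadline tracking is simple to automate. The sketch below assumes a 72-hour notification window purely for illustration; actual windows vary by jurisdiction and regulation (the GDPR, for example, uses 72 hours for notifying the supervisory authority, while U.S. state laws differ), so the figure is a placeholder, not legal guidance:

```python
from datetime import datetime, timedelta

def notification_deadline(discovered_at: datetime, window_hours: int = 72) -> datetime:
    """Deadline for regulator notification, given the discovery time.

    The 72-hour default is an illustrative assumption; substitute the
    window your jurisdiction actually imposes.
    """
    return discovered_at + timedelta(hours=window_hours)

def hours_remaining(discovered_at: datetime, now: datetime, window_hours: int = 72) -> float:
    """Hours left before the notification window closes (negative if missed)."""
    return (notification_deadline(discovered_at, window_hours) - now).total_seconds() / 3600
```

Wiring a check like this into the incident-response runbook keeps the clock visible to everyone working the breach, not just counsel.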

Legal Counsel Engagement

Organizations generally seek the support of legal counsel to help navigate the intricate legal aftermath of a data breach. Legal experts can help an organization through potential lawsuits and regulatory fines.

Engaging legal experts early allows their insights to guide the overall strategy, shaping everything from the communication plan to the recovery efforts. With early legal counsel support, the organization can be proactive in addressing legal challenges, potentially mitigating the severity of consequences that may arise.

Recovery and Remediation

IT System Restoration

The intricacies of IT system restoration mirror the reconstruction of a fortress following an intrusion. Restoring affected IT systems to normal functionality involves comprehensive measures such as thorough system checks, vulnerability assessments, and the eradication of any residual traces left by a breach.

Additionally, organizations generally look to enhance security measures during the recovery phase. Simply reverting to the pre-breach state is not enough; instead, the recovery process serves as an opportunity to address vulnerabilities in old systems and bolster defenses. This entails updating and patching systems, reassessing access controls, and contemplating the incorporation of advanced threat detection tools. Such measures collectively minimize the risk of a recurrence and contribute to an overall fortified cybersecurity posture.

Prevention Strategies

Best practices for securing sensitive data

Securing sensitive data is important in the age of relentless cyber threats. Employing encryption protocols, conducting regular security audits, and limiting access privileges are foundational best practices. These proactive measures help create a robust defense, forming an intricate web that shields critical information from potential breaches.
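The "limiting access privileges" practice can be as simple as a deny-by-default role check. The sketch below uses invented role and dataset names to illustrate least privilege in Python:

```python
# Hypothetical role-to-dataset grants: each role gets only what it needs.
ROLE_GRANTS = {
    "support": {"customer_contact"},
    "billing": {"customer_contact", "payment_records"},
    "analyst": {"aggregated_metrics"},
}

def can_access(role: str, dataset: str) -> bool:
    """Deny by default: unknown roles and ungranted datasets both fail."""
    return dataset in ROLE_GRANTS.get(role, set())

print(can_access("billing", "payment_records"))  # True
print(can_access("support", "payment_records"))  # False: least privilege
print(can_access("intern", "customer_contact"))  # False: unknown role
```

The design choice that matters here is the default: access is denied unless explicitly granted, so a misconfigured or forgotten role fails closed rather than open.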

Employee training programs to mitigate human error

Human error remains a significant contributor to data breaches. Implementing comprehensive employee training programs can be helpful in cultivating a security-conscious workforce and mitigating human error-caused vulnerabilities. From recognizing phishing attempts to practicing proper password hygiene, a well-informed staff acts as the first line of defense and can significantly reduce the likelihood of unintentional security lapses.

Implementing robust cybersecurity measures

The cornerstone of any data breach prevention strategy is the implementation of robust cybersecurity measures. This includes advanced intrusion detection systems, firewalls, and regular software updates. Proactively addressing vulnerabilities and staying abreast of the latest cybersecurity advancements help fortify an organization’s digital perimeter, creating an environment that is inherently resistant to malicious infiltrations.
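A toy version of intrusion detection's core idea, flagging anomalous patterns in logs, can be sketched in a few lines. The log format and failure threshold here are assumptions for the example; production IDS/IPS products correlate far richer signals:

```python
from collections import Counter

def flag_brute_force(log_lines: list[str], threshold: int = 5) -> set[str]:
    """Flag source IPs with a burst of failed logins.

    Assumed line format: '<ip> <LOGIN_OK|LOGIN_FAIL> <user>'.
    """
    failures = Counter(
        line.split()[0]
        for line in log_lines
        if line.split()[1] == "LOGIN_FAIL"
    )
    return {ip for ip, count in failures.items() if count >= threshold}
```

Even this crude heuristic illustrates the principle: detection is about baselining normal behavior and surfacing deviations early enough to act.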

Staying abreast of emerging trends

Staying ahead of data breach threats requires a keen awareness of emerging trends. From sophisticated phishing techniques to novel forms of malware, businesses should continuously adapt their cybersecurity strategies against evolving tactics employed by cybercriminals.

The dynamic nature of the cybersecurity landscape demands constant innovation. Adopting cutting-edge technologies like artificial intelligence for threat detection and investing in predictive analytics allows businesses to stay one step ahead, proactively identifying and neutralizing potential threats before they escalate.

Collaboration and information-sharing within industries

In the face of evolving cyber threats, collaboration is a powerful defense. Establishing networks for information-sharing within industries enables businesses to benefit from collective intelligence. By sharing best practices and threat intelligence, organizations can collectively strengthen their defenses against the ever-changing data breach landscape.

Takeaway

Data breaches are a persistent threat for all businesses capturing and storing personally identifiable information. Such businesses are inherently the stewards of this data and must protect it from bad actors seeking access for malicious purposes. Knowing what a data breach is constitutes just the first step in protecting that data; taking action is key. From securing sensitive data to fostering a cybersecurity-aware workforce, businesses must not merely react to the escalating threat of data breaches but proactively strive to create an impenetrable shield around their valuable information.

Visit our website for more information about our offerings and how Experian can help you prepare and respond to data breaches.


¹HIPAA Journal, 55% of Healthcare Organizations Suffered a Third-Party Data Breach in the Past Year [2022]
²Verizon, 2023 Verizon Data Breach Investigations Report
³Gurucul, 2023 Insider Threat Report
⁴IBM, Cost of a Data Breach Report 2023
⁵Office of the Attorney General, California Consumer Privacy Act (CCPA)
⁶CSO, Inside the Russian hack of Yahoo: How they did it
⁷BPB Online, Yahoo Data Breach: What Actually Happened?
⁸CSO, Marriott data breach FAQ: How did it happen and what was the impact?
⁹Cybersecurity Dive, Marriott finds financial reprieve in reduced GDPR penalty
¹⁰Investopedia, Capital One Data Breach Impacts 106 Million Customers
¹¹CNET, Capital One $190 Million Data Breach Settlement: Today Is the Last Day to Claim Money
¹²TechTarget, SolarWinds hack explained: Everything you need to know
¹³Reuters, SolarWinds says dealing with hack fallout cost at least $18 million
¹⁴BBC, Meat giant JBS pays $11m in ransom to resolve cyber-attack

Related Posts

A new reality for screening providers Everything about the candidate checked out. Their resume reflected the right experience. Their references confirmed it. The background screening process came back clean. From outside, there was no reason to hesitate. So, the company didn’t.  But within weeks, small inconsistencies began to surface. The employee struggled in ways that didn’t match their credentials. Follow-up questions led to vague answers. Eventually, a deeper review uncovered the issue; this wasn’t just a case of exaggeration. It was candidate fraud. And increasingly, it’s not just individuals acting alone.  In a widely reported scheme, foreign operatives posed as legitimate remote IT workers, using stolen identities and AI-assisted interviews to secure jobs at major Fortune 500 companies. Once hired, access was handed off, allowing bad actors to infiltrate corporate systems and generate millions in illicit revenue. In one case, a single individual funneled over $17 million to a foreign operation. These weren’t obvious scams. The candidates passed interviews. They cleared checks. And that’s exactly the point. For background screening and verification providers, this shift presents both a challenge and an opportunity. As candidate fraud becomes more sophisticated, your clients are no longer just looking to verify records – they’re looking to trust identity itself, and they’re looking to you to help them do it. The assumption that no longer holds For decades, hiring has relied on a simple premise: verify the records, resume, and you can trust the candidate. That model worked when identity was easier to validate in person. But in today’s digital-first hiring environment, identity can oftentimes be asserted, not proven. At the same time, identity-based fraud is accelerating. 
Synthetic identity fraud alone accounts for billions in annual losses, and employers are increasingly encountering candidates whose identities are far more difficult to validate than their resumes. This creates a critical disconnect: Organizations are still verifying records, but those records may be tied to identities that were never legitimate to begin with. Increasingly, they’re turning to their screening partners to close that gap. The reality of candidate fraud 31% of employers have interviewed candidates using a false identity Only 19% feel confident they can detect fraud in hiring 1 in 4 companies report losses of$50K+from fraudulent hires Why candidate fraud is getting harder to see The nature of candidate fraud has fundamentally changed. At one end of the spectrum, companies are still dealing with candidates who falsify resumes, costing businesses time and money when the truth comes to light later. But at the other end, the threat has escalated dramatically. Coordinated fraud rings are now using stolen identities and AI-assisted interviews to place individuals into remote roles, sometimes without ever revealing their identity. And this isn’t slowing down. According to Gartner, by 2028, 1 in 4 candidates could be fake, driven by AI, remote hiring, and identity manipulation. For screening providers, this introduces a new level of complexity. The challenge is no longer just delivering verified records; it’s helping clients surface risks that traditional screening processes were not designed to identify. What traditional screening still gets right None of this diminishes the importance of pre-employment screening. Verifying employment history, education, and background remains a critical part of responsible hiring, and it should. But even the most thorough screening process is designed to answer a specific question: Do the records align with the identity provided? What it does not answer is the question that matters most now: Is that identity real? 
That gap between record verification and identity validation is where modern fraud operates. And it represents an opportunity for screeners to expand their role from record validation to helping enable stronger identity confidence. The cost of believing everything is working When fraud moves through the hiring process undetected, the consequences aren’t always immediate, but they can be significant. There are financial risks, compliance exposure and potential access to sensitive systems. But there’s also a more subtle —and often overlooked — impact: The assumption that existing processes are working as intended. When fraudulent candidates pass through screening, it reinforces confidence in processes that may not be equipped for today’s threat landscape. Over time, that false sense of security can become a vulnerability. From screening provider to strategic partner As hiring evolves, so do expectations. Employers are no longer just looking for faster background checks - they’re looking for greater confidence in who they’re hiring. This shift creates an opportunity for screening providers to move upstream in the hiring process. By introducing identity verification earlier in the workflow, providers can help clients detect candidate fraud sooner, reduce downstream risk, and strengthen the integrity of hiring decisions.  More importantly, it allows providers to differentiate their offerings in an increasingly competitive market, shifting from a transactional service to a more strategic capability. A shift in thinking: Identity before everything else To address modern candidate fraud, organizations don’t just need better tools; they need a different starting point. Instead of beginning with records, leading providers are beginning with identity. They are asking a more fundamental question earlier in the process:  Is this person who they say they are? Is this person a real, consistent and verifiable person? 
When that foundation is established, everything that follows becomes more meaningful. Background checks become more reliable. Verification becomes more consistent. And the ability to detect candidate fraud improves, not because the process is longer, but because it’s more informed. In this model, identifying potential fraud becomes proactive rather than reactive. Why identity verification matters more now than ever The shift to remote and digital hiring hasn’t just changed how companies hire – it’s changed how fraud occurs. Today, a significant portion of fraudulent activity targets the employment process, making it a key point of exposure for identity misuse. In fact, 45% of all false document submissions now occur in the employment sector. In many cases, candidates who falsify information still progress through hiring workflows. A study revealed that 70% of candidates who falsify information still get hired. This reinforces today’s reality: Fraud is no longer slipping through the cracks; it’s moving through the front door. How Experian helps close the identity gap Experian® helps background screeners and verification providers bridge the gap between who a candidate claims to be and who they are. By combining identity verification, fraud detection, and verification solutions, Experian enables providers to enhance their existing solutions – without disrupting their workflows. This allows you to extend your value beyond traditional screening, help clients detect candidate fraud earlier, and strengthen confidence in hiring outcomes.   The result is not just better screening, it’s a stronger strategic position in your clients’ hiring ecosystem, one that reduces risk while improving speed and confidence. Candidate fraud isn’t an edge case anymore. It reflects a broader shift in how identity works in a digital world. And while traditional screening remains essential, it may not be sufficient on its own. 
Because if identity is uncertain, every subsequent check is built on unstable ground. But when identity is established earlier in the process, everything that follows becomes more dependable. Don’t just verify the candidate records, verify the identityLearn how Experian helps screening providers embed identity verification at the start of the hiring journey to help detect candidate fraud earlier, reduce risk, and strengthen screening outcomes.  Explore Experian’s Fraud Prevention Playbook for Pre-Employment Screening FAQs

by Kim Le 12 min read March 26, 2026

Model inventories are rapidly expanding. AI-enabled tools are entering workflows that were once deterministic and decisioning environments are more interconnected than ever. At the same time, regulatory scrutiny around model risk management continues to intensify. In many institutions, classification determines validation depth, monitoring intensity, and escalation pathways while informing board reporting. If classification is wrong, every downstream control is misaligned. And, in 2026, model classification is no longer just about assigning a tier, but rather about understanding data lineage, use case evolution, interdependencies, and governance accountability in a decentralized, AI-driven environment. We recently spoke with Mark Longman, Director of Analytics and Regulatory Technology, and here are some of his thoughts around five blind spots risk and compliance leaders should consider addressing now. 1. The “Set It and Forget It” Mentality The Blind Spot Model classification frameworks are often designed during a regulatory remediation effort or inventory modernization initiative. Once documented and approved, they can remain largely unchanged for years. However, model risk management is an ongoing process. “There’s really no sort of one and done when it comes to model risk management,” said Longman. Why It Matters Classification is not merely descriptive, it’s prescriptive. It drives the depth of validation, the frequency of monitoring, the intensity of governance oversight and the level of senior management visibility. As Longman notes, data fragmentation is compounding the challenge. “There’s data everywhere – internal, cloud, even shadow IT – and it’s tough to get a clear view into the inputs into the models,” he said. When inputs are unclear, tiering becomes inherently subjective and if classification frameworks are not reviewed regularly, governance intensity can become misaligned with real exposure. 
Therefore, static classification is a growing risk, especially in a world of rapidly expanding AI use cases. In a supervisory environment that continues to scrutinize model definitions, particularly as AI tools proliferate, a dynamic, periodically refreshed classification process can demonstrate institutional vigilance. 2. Assuming Third-Party Models Reduce Governance Accountability The Blind SpotThere is often an implicit belief that vendor-provided models carry less governance burden because they were developed externally. Why It Matters Vendor provided models continue to grow, particularly in AI-driven solutions, but supervisory expectations remain firm. “Third-party models do not diminish the responsibility of the institution for its governance and oversight of the model – whether it’s monitoring, ongoing validation, just evaluating drift model documentation,” Longman said. “The board and senior managers are responsible to make sure that these models are performing as expected and that includes third-party models.” Regulators consistently emphasize that institutions remain responsible for the outcomes produced by models used in their decisioning environments, regardless of origin. If a vendor model influences credit approvals, pricing, fraud decisions, or capital calculations, it directly affects customers, financial performance and compliance exposure. Treating third-party models as inherently lower risk can also distort internal tiering frameworks. When vendor models are under-classified, validation depth and monitoring rigor may be insufficient relative to their true impact. 3. Limited Situational Awareness of Model Interdependencies The Blind Spotfeed multiple downstream models simultaneously. Why It Matters Risk often flows across interdependencies. When upstream models degrade in performance or introduce bias, downstream models inherit that exposure. 
If multiple material decisions depend on the same data transformation or feature engineering process, concentration risk emerges. Without visibility into these dependencies, tiering assessments may underestimate cumulative risk, and monitoring frameworks may fail to detect systemic vulnerabilities. “There has to be a holistic view of what models are being used for – and really somebody to ensure there’s not that overlap across models,” Longman said. Supervisors are increasingly interested in understanding how model risk propagates through business processes. When institutions cannot articulate how models interact, it raises broader concerns about situational awareness and control effectiveness. Therefore, capturing interdependencies within the classification framework enhances more than documentation. It enables more accurate tiering, more targeted monitoring and more informed governance oversight. 4. Excluding Models Without Defensible Rationale The Blind SpotGray-area tools frequently sit outside formal inventories: rule-based engines, spreadsheet models, scenario calculators, heuristic decision aids, or emerging AI tools used for analysis and summarization. These tools may not neatly fit legacy definitions of a “model,” and so they are sometimes excluded without robust documentation. Why It Matters Regulatory definitions of “model” have broadened over time. What creates risk is the absence of defensible reasoning and documentation. Longman describes the risk clearly: “Some [teams] are deploying AI solutions that are sort of unbeknownst to the model risk management community – and almost creating what you might think of as a shadow model inventory.” Without visibility, institutions cannot confidently characterize use, trace inputs, or assign appropriate tiers, according to Longman. It also undermines the credibility of the official inventory during examinations. 
A well-governed program can articulate why certain tools fall outside model risk management scope, referencing documented criteria aligned with regulatory guidance. Without that evidence, exclusions can appear arbitrary, suggesting gaps in oversight.

5. Inconsistent or Subjective Classification Frameworks

The Blind Spot

As inventories scale and governance teams expand, classification decisions are often distributed across reviewers. Over time, discrepancies can emerge.

Why It Matters

Inconsistency undermines both risk management and regulatory confidence. If two models with comparable use cases and impact profiles are assigned different tiers without clear justification, it signals that the framework is not being applied uniformly.

AI adds even more complexity. When it comes to emerging AI model governance versus traditional model governance, there is a lot to unpack, Longman says: “The AI models themselves are a lot more complicated than your traditional logistic or multiple regression models. The data, the prompting – you need to monitor the prompts that the LLMs, for example, are responding to, and you need to watch for what you may think of as prompt drift.”

As frameworks evolve, particularly to incorporate AI, automation, and new regulatory interpretations, institutions must ensure that changes cascade across the entire inventory. Partial updates or selective reclassification introduce fragmentation.

Longman recommends formalizing classification through a structured decision tree embedded in policy to ensure consistent outcomes across business units. Beyond clear documentation, a strong classification program is applied consistently, measured objectively, and periodically reassessed across the full portfolio.

BONUS – 6. Elevating Classification with Data-Level Visibility

Some institutions are extending classification discipline beyond models to the data layer itself.
Longman describes organizations that maintain not only a model inventory but also a data inventory, mapping variables to the models they influence. This approach allows institutions to quickly assess downstream effects when operational or environmental changes occur, including system updates or even natural disasters affecting payment behavior. In an AI-driven environment, that traceability may become a competitive differentiator.

Conclusion

Model classification is foundational. It determines how risk is measured, monitored, escalated, and reported. In a rapidly evolving regulatory and technological environment, it cannot remain static. Institutions that invest now in transparency, consistency, and data-level visibility will not only reduce supervisory friction – they will build a governance framework capable of supporting the next generation of AI-enabled decisioning.
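The variable-to-model mapping described in the data-inventory point above can be sketched in a few lines. This is a minimal, hypothetical example: the variable names, model names, and inventory shape are assumptions made for illustration, not a real institution's data inventory.

```python
# Hypothetical data inventory: each variable maps to the models it influences.
DATA_INVENTORY = {
    "payment_history": ["behavioral_score_model", "collections_model"],
    "utility_rates": ["affordability_model"],
    "employment_status": ["credit_approval_model", "affordability_model"],
}

def affected_models(changed_variables, inventory):
    """Return the set of models touched by a change to any listed variable."""
    affected = set()
    for variable in changed_variables:
        affected.update(inventory.get(variable, []))
    return affected

# e.g. a natural disaster disrupts payment behavior data:
print(sorted(affected_models(["payment_history"], DATA_INVENTORY)))
# → ['behavioral_score_model', 'collections_model']
```

With a mapping like this in place, an operational change can be translated into an impact list in seconds, which is the kind of traceability the article suggests may become a differentiator.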

by Stefani Wendel 12 min read March 20, 2026

Fraud is evolving faster than ever, driven by digitalization, real-time payments, and increasingly sophisticated scams. For Warren Jones and his team at Santander Bank, staying ahead requires more than tools. It requires the right partner.

The partnership with Santander Bank began nearly a decade ago, during a period of rapid change in the fraud and banking landscape. Since then, the relationship has grown into a long-term collaboration focused on continuous improvement and innovation.

Experian products helped Santander address one of its most pressing operational challenges: a high-volume manual review queue for new account applications. While the vast majority of alerts in the queue were fraudulent and ultimately declined, a small percentage represented legitimate customers whose account openings were delayed. This created inefficiencies for staff and a poor first impression for genuine applicants. We worked alongside Santander to tackle this challenge head-on, transforming how applications were reviewed, how fraud was detected, and how legitimate customers were approved.

Beyond fraud prevention, implementing Experian's Ascend Platform™, with its intuitive user experience and robust data environment, has unlocked additional value across the organization. The platform supports multiple use cases, enabling collaboration between fraud and marketing teams to align strategies based on actionable insights.

Learn more about our Ascend Platform

by Zohreen Ismail 12 min read February 18, 2026